35 research outputs found

    Disconnection of network hubs and cognitive impairment after traumatic brain injury.

    Traumatic brain injury affects brain connectivity by producing traumatic axonal injury. This disrupts the function of large-scale networks that support cognition. The best way to describe this relationship is unclear, but one elegant approach is to view networks as graphs. Brain regions become nodes in the graph, and white matter tracts the connections. The overall effect of an injury can then be estimated by calculating graph metrics of network structure and function. Here we test which graph metrics best predict the presence of traumatic axonal injury, as well as which are most highly associated with cognitive impairment. A comprehensive range of graph metrics was calculated from structural connectivity measures for 52 patients with traumatic brain injury, 21 of whom had microbleed evidence of traumatic axonal injury, and 25 age-matched controls. White matter connections between 165 grey matter brain regions were defined using tractography, and structural connectivity matrices were calculated from skeletonized diffusion tensor imaging data. This technique estimates injury at the centre of each tract, but is insensitive to damage at tract edges. Graph metrics were calculated from the resulting connectivity matrices, and machine-learning techniques were used to select the metrics that best predicted the presence of traumatic brain injury. In addition, we used regularization and variable selection via the elastic net to predict patient behaviour on tests of information processing speed, executive function and associative memory. Support vector machines trained with graph metrics of white matter connectivity matrices from the microbleed group were able to identify patients with a history of traumatic brain injury with 93.4% accuracy, a result robust to different ways of sampling the data. Graph metrics were significantly associated with cognitive performance: information processing speed (R² = 0.64), executive function (R² = 0.56) and associative memory (R² = 0.25). These results were then replicated in a separate group of patients without microbleeds. The most influential graph metrics were betweenness centrality and eigenvector centrality, which provide measures of the extent to which a given brain region connects other regions in the network. Reductions in betweenness centrality and eigenvector centrality were particularly evident within hub regions, including the cingulate cortex and caudate. Our results demonstrate that betweenness centrality and eigenvector centrality are reduced within network hubs, due to the impact of traumatic axonal injury on network connections. The dominance of betweenness centrality and eigenvector centrality suggests that cognitive impairment after traumatic brain injury results from the disconnection of network hubs by traumatic axonal injury.
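
    A rough analogue of this pipeline can be built from standard open-source tools. The sketch below assumes a subjects-by-regions-by-regions array of structural connectivity matrices and binary group labels (the toy data, dimensions and variable names are illustrative, not the study's); it computes betweenness and eigenvector centrality per region and feeds them to a cross-validated linear support vector machine.

```python
# Minimal sketch, assuming a subjects x regions x regions array of structural
# connectivity matrices and binary group labels (toy data, not the study's
# pipeline). Compute betweenness and eigenvector centrality per region, then
# classify with a cross-validated linear SVM.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def node_centralities(W):
    """Betweenness and eigenvector centrality for one weighted connectivity matrix.
    Note: betweenness treats weights as distances; real pipelines typically
    convert connection strengths to lengths first."""
    G = nx.from_numpy_array(W)
    bc = nx.betweenness_centrality(G, weight="weight")
    ec = nx.eigenvector_centrality_numpy(G, weight="weight")
    return np.concatenate([np.fromiter(bc.values(), float),
                           np.fromiter(ec.values(), float)])

rng = np.random.default_rng(0)
n_subjects, n_regions = 20, 30                      # toy dimensions
conn = rng.random((n_subjects, n_regions, n_regions))
conn = (conn + conn.transpose(0, 2, 1)) / 2         # make matrices symmetric
labels = np.array([0] * 10 + [1] * 10)              # 0 = control, 1 = patient

X = np.array([node_centralities(W) for W in conn])
clf = SVC(kernel="linear")
print(cross_val_score(clf, X, labels, cv=5).mean())  # ~chance on random data
```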

    Voltage imaging of waking mouse cortex reveals emergence of critical neuronal dynamics.

    Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used genetically encoded voltage indicator imaging in mice, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight into how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain.
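
    As a side note on how scale-invariant statistics of this kind are commonly quantified, the sketch below fits a power-law exponent to synthetic event sizes by maximum likelihood (a Clauset-style continuous approximation). The synthetic data, x_min and fitting choices are illustrative and are not the procedure used in this study.

```python
# Minimal sketch: maximum-likelihood estimate of a power-law exponent for
# event sizes (continuous approximation). Synthetic data and x_min are
# illustrative; the study's avalanche definition and fitting may differ.
import numpy as np

rng = np.random.default_rng(3)

def sample_power_law(n, alpha=1.8, x_min=1.0):
    """Draw n samples from a continuous power law p(x) ~ x^-alpha, x >= x_min."""
    u = rng.random(n)
    return x_min * (1 - u) ** (-1.0 / (alpha - 1.0))

def fit_exponent(x, x_min=1.0):
    """MLE for the exponent of a continuous power law truncated at x_min."""
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

sizes = sample_power_law(50_000)
print(f"estimated exponent: {fit_exponent(sizes):.2f} (true value 1.8)")
```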

    Parcels and particles: Markov blankets in the brain

    At the inception of human brain mapping, two principles of functional anatomy underwrote most conceptions—and analyses—of distributed brain responses: namely, functional segregation and integration. There are currently two main approaches to characterizing functional integration. The first is a mechanistic modeling of connectomics in terms of directed effective connectivity that mediates neuronal message passing and dynamics on neuronal circuits. The second, phenomenological approach usually characterizes undirected functional connectivity (i.e., measurable correlations) in terms of intrinsic brain networks, self-organized criticality, dynamical instability, and so on. This paper describes a treatment of effective connectivity that speaks to the emergence of intrinsic brain networks and critical dynamics. It is predicated on the notion of Markov blankets, which play a fundamental role in the self-organization of far-from-equilibrium systems. Using the apparatus of the renormalization group, we show that much of the phenomenology found in network neuroscience is an emergent property of a particular partition of neuronal states over progressively coarser scales. As such, it offers a way of linking dynamics on directed graphs to the phenomenology of intrinsic brain networks.
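
    The Markov blanket construction invoked here can be illustrated on an ordinary directed graph: a node's blanket is the union of its parents, its children, and its children's other parents. The sketch below shows only this generic graphical-model definition, not the paper's renormalization-group partition of neuronal states.

```python
# Illustrative sketch: the Markov blanket of a node in a directed graph is
# its parents, its children, and the other parents of its children.
# This is the generic graphical-model construction, not the paper's
# renormalization-group procedure over neuronal states.
import networkx as nx

def markov_blanket(G: nx.DiGraph, node):
    parents = set(G.predecessors(node))
    children = set(G.successors(node))
    co_parents = {p for c in children for p in G.predecessors(c)} - {node}
    return parents | children | co_parents

G = nx.DiGraph([("u", "x"), ("x", "y"), ("v", "y"), ("y", "z")])
print(markov_blanket(G, "x"))   # {'u', 'y', 'v'}
```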

    Cascades and Cognitive State: Focused Attention Incurs Subcritical Dynamics

    The analysis of neuronal avalanches supports the hypothesis that the human cortex operates with critical neural dynamics. Here, we investigate the relationship between cascades of activity in electroencephalogram data, cognitive state, and reaction time in humans using a multimodal approach. We recruited 18 healthy volunteers for the acquisition of simultaneous electroencephalogram and functional magnetic resonance imaging during both rest and a visuomotor cognitive task. We compared distributions of electroencephalogram-derived cascades to reference power laws for the task and rest conditions. We then explored the large-scale spatial correspondence of these cascades in the simultaneously acquired functional magnetic resonance imaging data. Furthermore, we investigated whether individual variability in reaction times is associated with the amount of deviation from power law form. We found that while resting-state cascades are associated with approximate power law form, the task state is associated with subcritical dynamics. Furthermore, we found that electroencephalogram cascades are related to blood oxygen level-dependent activation, predominantly in sensorimotor brain regions. Finally, we found that decreased reaction times during the task condition are associated with increased proximity to power law form of the cascade distributions. These findings suggest that the resting state is associated with near-critical dynamics, in which a high dynamic range and a large repertoire of brain states may be advantageous. In contrast, a focused cognitive task induces subcritical dynamics, associated with a lower dynamic range, which in turn may reduce interference affecting task performance.
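
    A minimal version of such a cascade analysis can be sketched as follows, assuming a channels-by-time array of activity: suprathreshold events are grouped into contiguous cascades, and the cascade-size distribution is compared with a reference power law via a Kolmogorov–Smirnov-style distance. The threshold, toy data and reference exponent below are illustrative choices, not the study's parameters.

```python
# Minimal sketch: threshold multichannel activity into binary events, group
# consecutive active time bins into cascades, and compare the cascade-size
# distribution to a reference power law with a KS-style distance.
# Threshold, toy data and reference exponent are illustrative choices only.
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((32, 10_000))            # channels x time (toy EEG)
events = np.abs(data) > 3.0                         # suprathreshold excursions
active = events.any(axis=0).astype(int)             # any channel active per bin

# Cascade = contiguous run of active bins; size = number of events in the run
sizes = []
count, running = 0, False
for t in range(active.size):
    if active[t]:
        count += events[:, t].sum()
        running = True
    elif running:
        sizes.append(count)
        count, running = 0, False
sizes = np.array(sizes)

def ks_to_power_law(sizes, alpha=1.5):
    """KS distance between empirical sizes and a discrete power law with
    exponent alpha over the observed range."""
    s = np.sort(sizes)
    support = np.arange(s.min(), s.max() + 1)
    pmf = support ** -alpha
    cdf_ref = np.cumsum(pmf / pmf.sum())
    cdf_emp = np.searchsorted(s, support, side="right") / s.size
    return np.abs(cdf_emp - cdf_ref).max()

print(ks_to_power_law(sizes))
```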

    Cortical Entropy, Mutual Information and Scale-Free Dynamics in Waking Mice

    Some neural circuits operate with simple dynamics characterized by one or a few well-defined spatiotemporal scales (e.g. central pattern generators). In contrast, cortical neuronal networks often exhibit richer activity patterns in which all spatiotemporal scales are represented. Such "scale-free" cortical dynamics manifest as cascades of activity with cascade sizes that are distributed according to a power law. Theory and in vitro experiments suggest that information transmission among cortical circuits is optimized by scale-free dynamics. In vivo tests of this hypothesis have been limited by experimental techniques with insufficient spatial coverage and resolution, i.e., restricted access to a wide range of scales. We overcame these limitations by using genetically encoded voltage imaging to track neural activity in layer 2/3 pyramidal cells across the cortex in mice. As mice recovered from anesthesia, we observed three changes: (a) cortical information capacity increased, (b) information transmission among cortical regions increased, and (c) neural activity became scale-free. Our results demonstrate that both information capacity and information transmission are maximized in the awake state in cortical regions with scale-free network dynamics.
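
    The information-theoretic quantities referred to here can be illustrated with simple plug-in estimators on discretized signals from two regions. The toy data, bin counts and estimators below are a sketch; the study's estimators and preprocessing may well differ.

```python
# Minimal sketch: plug-in estimates of entropy and mutual information from
# discretized activity in two regions. Bin counts and the toy data are
# illustrative; the study's estimators may differ.
import numpy as np

def entropy(x, bins=16):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum()

rng = np.random.default_rng(2)
region_a = rng.standard_normal(5_000)
region_b = 0.6 * region_a + 0.8 * rng.standard_normal(5_000)  # correlated signal

print(f"H(A)   = {entropy(region_a):.2f} bits")
print(f"I(A;B) = {mutual_information(region_a, region_b):.2f} bits")
```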

    Conservation laws by virtue of scale symmetries in neural systems

    In contrast to the symmetries of translation in space, rotation in space, and translation in time, the known laws of physics are not universally invariant under transformation of scale. However, a special case exists in which the action is scale invariant if it satisfies the following two constraints: 1) it must depend upon a scale-free Lagrangian, and 2) the Lagrangian must change under scale in the same way as the inverse time, 1/t. Our contribution lies in the derivation of a generalised Lagrangian, in the form of a power series expansion, that satisfies these constraints. This generalised Lagrangian furnishes a normal form for dynamic causal models (state space models based upon differential equations) that can be used to distinguish scale symmetry from scale freeness in empirical data. We establish face validity with an analysis of simulated data, in which we show how scale symmetry can be identified and how the associated conserved quantities can be estimated in neuronal time series.
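
    For reference, the constraint named in the abstract can be written out explicitly: rescaling time as t → λt leaves the action invariant exactly when the Lagrangian scales as the inverse time. The display below is a sketch of that condition only, not the paper's power-series derivation of the generalised Lagrangian.

```latex
% Sketch of the scale-symmetry constraint (not the paper's full derivation):
% under t -> \lambda t the action is invariant iff L scales as the inverse time.
\[
  S \;=\; \int L \,\mathrm{d}t
  \;\longrightarrow\;
  \int \bigl(\lambda^{-1} L\bigr)\,\mathrm{d}(\lambda t)
  \;=\; \int L \,\mathrm{d}t \;=\; S,
  \qquad \text{i.e.}\qquad
  L \;\to\; \lambda^{-1} L \;\sim\; \frac{1}{t}.
\]
```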

    Rendering neuronal state equations compatible with the principle of stationary action

    The principle of stationary action is a cornerstone of modern physics, providing a powerful framework for investigating dynamical systems found in classical mechanics through to quantum field theory. However, computational neuroscience, despite its heavy reliance on concepts in physics, is anomalous in this regard as its main equations of motion are not compatible with a Lagrangian formulation and hence with the principle of stationary action. Taking the Dynamic Causal Modelling (DCM) neuronal state equation as an instructive archetype of the first-order linear differential equations commonly found in computational neuroscience, we show that it is possible to make certain modifications to this equation to render it compatible with the principle of stationary action. Specifically, we show that a Lagrangian formulation of the DCM neuronal state equation is facilitated using a complex dependent variable, an oscillatory solution, and a Hermitian intrinsic connectivity matrix. We first demonstrate proof of principle by using Bayesian model inversion to show that both the original and modified models can be correctly identified via in silico data generated directly from their respective equations of motion. We then provide motivation for adopting the modified models in neuroscience by using three different types of publicly available in vivo neuroimaging datasets, together with open source MATLAB code, to show that the modified (oscillatory) model provides a more parsimonious explanation for some of these empirical timeseries. It is our hope that this work will, in combination with existing techniques, allow people to explore the symmetries and associated conservation laws within neural systems – and to exploit the computational expediency facilitated by direct variational techniques.
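
    As an illustration of why those three modifications help, a first-order complex linear state equation with a Hermitian matrix admits a textbook Lagrangian of Schrödinger-like form. The expression below is a plausible sketch of such a formulation, not necessarily the exact Lagrangian used in the paper.

```latex
% Illustrative only: a standard Lagrangian for a first-order complex linear
% state equation with Hermitian A; the paper's exact expression may differ.
\[
  L(z, \dot z)
  \;=\; \tfrac{i}{2}\bigl( z^{\dagger}\dot z - \dot z^{\dagger} z \bigr)
        \;-\; z^{\dagger} A z,
  \qquad A = A^{\dagger},
\]
\[
  \text{whose Euler--Lagrange equations yield}\quad
  i\,\dot z = A z,
  \qquad z(t) = e^{-iAt} z(0)
  \quad \text{(oscillatory, since } A \text{ is Hermitian).}
\]
```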

    Scaling conservation laws in neural systems

    In contrast to the symmetries of translation in space, rotation in space, and translation in time, the known laws of physics are not universally invariant under transformation of scale. However, the action can be invariant under change of scale in the special case of a dynamical system described by a Lagrangian that changes under scale in the same way as the inverse time, 1/t. Crucially, this means symmetries under change of scale can exist in dynamical systems under certain constraints. Our contribution lies in the derivation of a generalised scale invariant Lagrangian – in the form of a power series expansion – that satisfies these constraints. This generalised Lagrangian furnishes a normal form for dynamic causal models (i.e., state space models based upon differential equations) that can be used to distinguish scale invariance (scale symmetry) from scale freeness in empirical data. We establish face validity with an analysis of simulated data and then show how scale invariance can be identified – and how the associated conserved quantities can be estimated – in neuronal timeseries.

    Neural systems under change of scale

    We derive a theoretical construct that allows for the characterisation of both scalable and scale free systems within the Dynamic Causal Modelling framework. We define a dynamical system to be ‘scalable’ if the same equation of motion continues to apply as the system changes in size. As an example of such a system, we simulate planetary orbits varying in size and show that our proposed methodology can be used to recover Kepler’s third law from the timeseries. In contrast, a ‘scale free’ system is one in which there is no characteristic length scale, meaning that images of such a system are statistically unchanged at different levels of magnification. As an example of such a system, we use calcium imaging collected in murine cortex and show that the dynamical critical exponent, as defined in renormalization group theory, can be estimated in an empirical biological setting. We find that a task-relevant region of the cortex is associated with higher dynamical critical exponents in task vs. spontaneous states, and vice versa for a task-irrelevant region.
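
    The ‘scalable’ example can be reproduced in miniature: integrate Newtonian orbits at several radii, measure each period, and read Kepler’s third law off the slope of log T against log r. The gravitational parameter, radii and integrator settings below are arbitrary toy choices, not the simulation reported in the paper.

```python
# Minimal sketch of the 'scalable system' example: integrate Newtonian orbits
# at several radii, measure the period of each, and recover Kepler's third law
# (T^2 proportional to r^3) as the slope of log T vs. log r.
# GM, radii, and step size are arbitrary toy values.
import numpy as np

GM = 1.0

def orbital_period(r, dt=1e-3):
    """Integrate a circular orbit with velocity-Verlet and accumulate the
    swept angle until one full revolution is completed."""
    pos = np.array([r, 0.0])
    vel = np.array([0.0, np.sqrt(GM / r)])      # circular-orbit speed
    acc = -GM * pos / np.linalg.norm(pos) ** 3
    angle, t, prev_theta = 0.0, 0.0, 0.0
    while angle < 2 * np.pi:
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        new_acc = -GM * pos / np.linalg.norm(pos) ** 3
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
        theta = np.arctan2(pos[1], pos[0])
        angle += (theta - prev_theta + np.pi) % (2 * np.pi) - np.pi
        prev_theta = theta
        t += dt
    return t

radii = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
periods = np.array([orbital_period(r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(periods), 1)[0]
print(f"log T vs log r slope = {slope:.3f}  (Kepler's third law predicts 1.5)")
```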

    A complex systems perspective on neuroimaging studies of behavior and its disorders

    The study of complex systems deals with emergent behavior that arises as a result of nonlinear spatiotemporal interactions between a large number of components, both within the system and between the system and its environment. There is a strong case to be made that neural systems, as well as their emergent behavior and disorders, can be studied within the framework of complexity science. In particular, the field of neuroimaging has begun to apply both theoretical and experimental procedures originating in complexity science—usually in parallel with traditional methodologies. Here, we illustrate the basic properties that characterize complex systems and evaluate how they relate to what we have learned about brain structure and function from neuroimaging experiments. We then argue in favor of adopting a complex systems-based methodology in the study of neuroimaging, alongside appropriate experimental paradigms, and with minimal influence from non-complex-systems approaches. Our exposition includes a review of the fundamental mathematical concepts, combined with practical examples and a compilation of results from the literature.